Category-generation performance in Mandarin-English bilingual children
Research has shown that children categorize words in terms of taxonomic and slot-filler strategies. Monolingual children were thought to shift from a slot-filler to a taxonomic strategy between the ages of five and eight. The aim of this study is to analyze how Mandarin-English bilingual children organize their lexical-semantic system, using a category-generation task that investigates taxonomic and slot-filler organizational strategies in each language. Fifty-three Mandarin-English bilingual children (between 4 and 7 years of age) were included in this study. Participants were asked to name as many items as they could think of in slot-filler and taxonomic conditions in English and Mandarin. The results indicate greater performance in English than in Mandarin for children who were five years or older. Four-year-old bilingual children produced a comparable number of items in the slot-filler and taxonomic conditions, but the five-, six-, and seven-year-old bilingual children showed greater performance in the taxonomic condition. Children performed better on the animal category than the clothes category, and better on the clothes category than the food category. These findings, while largely consistent with the existing literature, suggest that the slot-filler to taxonomic shift may take place at an earlier age in bilingual children than in monolingual children.

Communication Sciences and Disorder
Exemplar-Centered Supervised Shallow Parametric Data Embedding
Metric learning methods for dimensionality reduction in combination with
k-Nearest Neighbors (kNN) have been extensively deployed in many
classification, data embedding, and information retrieval applications.
However, most of these approaches involve pairwise training data comparisons,
and thus have quadratic computational complexity with respect to the size of
the training set, preventing them from scaling to fairly large datasets. Moreover,
during testing, comparing test data against all the training data points is
also expensive in terms of both computational cost and resources required.
Furthermore, previous metrics are either too constrained or too expressive to
be well learned. To effectively solve these issues, we present an
exemplar-centered supervised shallow parametric data embedding model, using a
Maximally Collapsing Metric Learning (MCML) objective. Our strategy learns a
shallow high-order parametric embedding function and compares training/test
data only with learned or precomputed exemplars, resulting in a cost function
with linear computational complexity for both training and testing. We also
empirically demonstrate, using several benchmark datasets, that for
classification in a two-dimensional embedding space, our approach not only
speeds up kNN by hundreds of times but also outperforms state-of-the-art
supervised embedding approaches.

Comment: accepted to IJCAI201
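The exemplar-centered idea above can be sketched minimally in NumPy. This is an illustrative simplification, not the paper's method: here the "exemplars" are simply per-class means (the paper learns or precomputes them), the shallow embedding is a plain linear map rather than a high-order parametric function, and the data are synthetic. What it does preserve is the key structural point: the MCML-style softmax objective and the test-time rule compare each point only against the exemplars, so each training step and each prediction costs O(n × n_exemplars) instead of O(n²).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian classes in 5-D.
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)), rng.normal(3.0, 1.0, (50, 5))])
y = np.array([0] * 50 + [1] * 50)

# Exemplars: per-class means (an assumption of this sketch; the paper's
# exemplars may be learned or precomputed differently).
exemplars = np.vstack([X[y == c].mean(axis=0) for c in (0, 1)])
ex_labels = np.array([0, 1])

# Shallow parametric embedding: a single linear map to 2-D.
W = rng.normal(0.0, 0.1, (5, 2))

def loss_and_grad(W):
    """MCML-style objective against exemplars only: a softmax over negative
    squared embedding distances should put all mass on the point's own class
    exemplar. Cost per step is linear in the number of training points."""
    Z, E = X @ W, exemplars @ W
    d2 = ((Z[:, None, :] - E[None, :, :]) ** 2).sum(-1)   # (n, n_exemplars)
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    target = (y[:, None] == ex_labels[None, :]).astype(float)
    loss = -np.mean((target * np.log(p + 1e-12)).sum(axis=1))
    G = (p - target) / len(X)            # softmax cross-entropy: dloss/dlogits
    grad = np.zeros_like(W)
    for j in range(len(exemplars)):      # logits_ij = -||(x_i - e_j) W||^2
        D = X - exemplars[j]
        grad += -2.0 * D.T @ (G[:, j:j + 1] * (D @ W))
    return loss, grad

for _ in range(200):                     # plain gradient descent
    loss, grad = loss_and_grad(W)
    W -= 0.05 * grad

# Testing compares each point only to the embedded exemplars (1-NN over
# exemplars), avoiding comparison against the full training set.
d2_test = (((X @ W)[:, None, :] - (exemplars @ W)[None, :, :]) ** 2).sum(-1)
pred = ex_labels[d2_test.argmin(axis=1)]
acc = (pred == y).mean()
```

Because the softmax and the nearest-neighbor rule both range over exemplars rather than training points, adding more training data grows the cost only linearly, which is the scaling advantage the abstract claims.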